
    The Effects of Cardiac Specialty Hospitals on the Cost and Quality of Medical Care

    The recent rise of specialty hospitals -- typically for-profit firms that are at least partially owned by physicians -- has led to substantial debate about their effects on the cost and quality of care. Advocates of specialty hospitals claim they improve quality and lower cost; critics contend they concentrate on providing profitable procedures and attracting relatively healthy patients, leaving (predominantly nonprofit) general hospitals with a less-remunerative, sicker patient population. We find support for both sides of this debate. Markets experiencing entry by a cardiac specialty hospital have lower spending for cardiac care without significantly worse clinical outcomes. In markets with a specialty hospital, however, specialty hospitals tend to attract healthier patients and provide higher levels of intensive procedures than general hospitals.

    Concurrent Scheme

    Journal Article: This paper describes an evolution of the Scheme language to support parallelism with tight coupling of control and data. Mechanisms are presented to address the difficult and related problems of mutual exclusion and data sharing that arise in concurrent language systems. The mechanisms are tailored to preserve Scheme semantics as much as possible while allowing for efficient implementation. Prototype implementations of the resulting language have been completed and are described. A third implementation is underway for the Mayfly, a distributed-memory parallel processor with a twisted-torus communication topology, under development at the Hewlett-Packard Research Laboratories. The language model is particularly well suited to the Mayfly processor, as will be shown.
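
    The abstract leaves the language-level details to the paper itself. As a loose illustration only (not the paper's Scheme mechanism, and with an invented SharedCell name), the C++ sketch below shows the basic shape of a data-sharing construct whose accesses are serialized by mutual exclusion.

```cpp
// Illustrative sketch only: a shared cell whose reads and updates are
// serialized by a mutex, mirroring the kind of mutual-exclusion and
// data-sharing mechanism the abstract refers to. Not the paper's design.
#include <mutex>
#include <utility>

template <typename T>
class SharedCell {
public:
    explicit SharedCell(T initial) : value_(std::move(initial)) {}

    // Read the current value under the lock.
    T get() {
        std::lock_guard<std::mutex> guard(mutex_);
        return value_;
    }

    // Apply an update function atomically with respect to other accessors.
    template <typename F>
    void update(F f) {
        std::lock_guard<std::mutex> guard(mutex_);
        value_ = f(value_);
    }

private:
    std::mutex mutex_;
    T value_;
};
```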

    Compiling distributed C++

    Technical Report: Distributed C++ (DC++) is a language for writing parallel applications on loosely coupled distributed systems in C++. Its key idea is to extend the C++ class into three categories: gateway classes, which act as communication and synchronization entry points between abstract processors; classes whose instances may be passed by value between abstract processors via gateways; and vanilla C++ classes. DC++ code is compiled to C++ code with calls to the DC++ runtime system. The DC++ compiler wraps gateway classes with handle classes so that remote procedure calls are transparent. It adds static variables to value classes and produces code used to marshal and unmarshal arguments when these value classes appear in remote procedure calls. Value classes are deep copied and preserve structure sharing. This paper describes DC++ compilation and reports on its performance.
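
    As a hypothetical sketch of the handle-wrapping idea described above (the Transport, AccountGateway, and AccountHandle names are invented here and are not the DC++ API), a generated handle class can present the same interface as a gateway class while marshalling arguments and forwarding each call to a remote instance:

```cpp
// Sketch: a compiler-generated handle mimics a gateway class's interface,
// turning each method call into a marshalled remote procedure call.
#include <cstdint>
#include <string>
#include <vector>

struct Transport {
    // Sends a marshalled request to the abstract processor owning the object
    // and returns the marshalled reply (implementation omitted).
    virtual std::vector<std::uint8_t> call(const std::string& method,
                                           const std::vector<std::uint8_t>& args) = 0;
    virtual ~Transport() = default;
};

// The gateway class as the programmer would write it.
class AccountGateway {
public:
    void deposit(std::int64_t amount) { balance_ += amount; }
    std::int64_t balance() const { return balance_; }
private:
    std::int64_t balance_ = 0;
};

// Generated handle: same interface, but every call crosses the transport.
class AccountHandle {
public:
    explicit AccountHandle(Transport& t) : transport_(t) {}

    void deposit(std::int64_t amount) {
        transport_.call("deposit", marshalInt(amount));
    }

    std::int64_t balance() {
        return unmarshalInt(transport_.call("balance", {}));
    }

private:
    static std::vector<std::uint8_t> marshalInt(std::int64_t v) {
        std::vector<std::uint8_t> out(8);
        for (int i = 0; i < 8; ++i) out[i] = static_cast<std::uint8_t>(v >> (8 * i));
        return out;
    }
    static std::int64_t unmarshalInt(const std::vector<std::uint8_t>& in) {
        std::int64_t v = 0;
        for (int i = 0; i < 8; ++i) v |= static_cast<std::int64_t>(in[i]) << (8 * i);
        return v;
    }

    Transport& transport_;
};
```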

    Using utilization profiles in allocation and partitioning for multiprocessor systems

    Journal Article: The problems of multiprocessor partitioning and program allocation are interdependent and critical to the performance of multiprocessor systems. Minimizing resource partitions for parallel programs on partitionable multiprocessors facilitates greater processor utilization and throughput. The processing resource requirements of parallel programs vary during program execution and are allocation dependent. Optimal resource utilization requires that resource requirements be modeled as variable over time. This paper investigates the use of program profiles in allocating programs and partitioning multiprocessor systems. An allocation method is discussed whose goals are to (1) minimize program execution time, (2) minimize the total number of processors used, (3) characterize variation in processor requirements over the lifetime of a program, (4) accurately predict the impact on run time of the number of processors available at any point in time, and (5) minimize fluctuations in processor requirements to facilitate efficient sharing of processors between partitions on a partitionable multiprocessor. An application to program partitioning is discussed that improves partition run times compared to other methods.
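
    A minimal sketch, not the paper's method, of how a utilization profile might inform partition sizing: given the number of processors a program can use in each phase of its execution, summarize the peak and average demand (the names and example profile below are assumptions made for illustration).

```cpp
// Sketch: summarize a per-phase processor-demand profile so a partition can
// be sized between the average and the peak requirement.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

struct ProfileSummary {
    int peak;        // largest number of processors requested in any phase
    double average;  // mean request across phases
};

// Assumes a non-empty profile.
ProfileSummary summarize(const std::vector<int>& processorsPerPhase) {
    int peak = *std::max_element(processorsPerPhase.begin(), processorsPerPhase.end());
    double avg = std::accumulate(processorsPerPhase.begin(), processorsPerPhase.end(), 0.0)
                 / processorsPerPhase.size();
    return {peak, avg};
}

int main() {
    // Hypothetical profile: few processors needed early, many in the middle.
    std::vector<int> profile = {2, 4, 16, 16, 8, 4};
    ProfileSummary s = summarize(profile);
    // Sizing near the average rather than the peak trades some run time for
    // better overall utilization, the trade-off the paper studies.
    std::printf("peak = %d, average = %.1f processors\n", s.peak, s.average);
}
```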

    DPOS: A metalanguage and programming environment for parallel processors

    Journal Article: The complexity and diversity of parallel programming languages and computer architectures hinder programmers in developing programs and greatly limit program portability. All MIMD parallel programming systems, however, address common requirements for process creation, process management, and interprocess communication. This paper describes and illustrates a structured programming system (DPOS) and graphical programming environment for generating and debugging high-level MIMD parallel programs. DPOS is a metalanguage for defining parallel program networks based on the common requirements of distributed parallel computing that is portable across languages, modular, and highly flexible. The system uses the concept of stratification to separate process network creation and the control of parallelism from computational work. Individual processes are defined within the process object layer as traditional single-threaded programs without parallel language constructs. Process networks and communication are defined graphically within the system layer at a high level of abstraction as recursive graphs. Communication is facilitated in DPOS by extending message-passing semantics in several ways to implement highly flexible message-passing constructs. DPOS processes exchange messages through bidirectional channel objects using guarded, buffered, synchronous, and asynchronous communication semantics. The DPOS environment also generates source code and provides a simulation system for graphical debugging and animation of the programs in graph form.
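
    The channel objects mentioned above are described only abstractly; as an illustrative sketch under simplified assumptions (the BufferedChannel name and blocking semantics are inventions here, not the DPOS API), a buffered channel can be modeled as a bounded queue where senders block when the buffer is full and receivers block when it is empty.

```cpp
// Sketch of a buffered channel: bounded queue guarded by a mutex and two
// condition variables, giving blocking send/receive semantics.
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
class BufferedChannel {
public:
    explicit BufferedChannel(std::size_t capacity) : capacity_(capacity) {}

    void send(T value) {
        std::unique_lock<std::mutex> lock(mutex_);
        notFull_.wait(lock, [&] { return queue_.size() < capacity_; });
        queue_.push(std::move(value));
        notEmpty_.notify_one();
    }

    T receive() {
        std::unique_lock<std::mutex> lock(mutex_);
        notEmpty_.wait(lock, [&] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        notFull_.notify_one();
        return value;
    }

private:
    std::size_t capacity_;
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable notFull_, notEmpty_;
};
```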

    A communication-ordered task graph allocation algorithm

    Technical Report: The inherently asynchronous nature of the data flow computation model allows the exploitation of maximum parallelism in program execution. While this computational model holds great promise, several problems must be solved in order to achieve a high degree of program performance. The allocation and scheduling of programs on MIMD distributed-memory parallel hardware is necessary for the implementation of efficient parallel systems. Finding an optimal solution requires achieving maximum parallelism consistent with resource limits while minimizing communication costs, a problem that has been proven NP-complete. This paper addresses the static allocation of tasks to distributed-memory MIMD systems in which computation and communication proceed simultaneously. It discusses similarities and differences between several recent heuristic allocation approaches, identifies common problems inherent in these approaches, and presents a new algorithm scheme and heuristics that resolve the identified problems and show significant performance benefits.
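
    To make the class of algorithm concrete, here is a rough sketch of a communication-aware list-scheduling heuristic; it is not the algorithm the report proposes, and the task model and cost parameters are assumptions. Tasks are visited in a given topological order and each is placed on the processor that yields the earliest finish time, charging a communication delay whenever a predecessor's result must cross processors.

```cpp
// Sketch: greedy, communication-aware placement of a task graph onto
// processors, visiting tasks in topological order.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Task {
    double computeTime;
    std::vector<int> predecessors;  // indices of tasks this task depends on
};

std::vector<int> allocate(const std::vector<Task>& tasks, int numProcessors,
                          double commCost) {
    std::vector<int> placement(tasks.size(), -1);
    std::vector<double> taskFinish(tasks.size(), 0.0);
    std::vector<double> procFree(numProcessors, 0.0);

    for (std::size_t t = 0; t < tasks.size(); ++t) {  // assumed topological order
        int bestProc = 0;
        double bestFinish = 1e300;
        for (int p = 0; p < numProcessors; ++p) {
            double ready = procFree[p];
            for (int pred : tasks[t].predecessors) {
                // Data from a predecessor on another processor arrives later.
                double arrival = taskFinish[pred] + (placement[pred] == p ? 0.0 : commCost);
                ready = std::max(ready, arrival);
            }
            double finish = ready + tasks[t].computeTime;
            if (finish < bestFinish) { bestFinish = finish; bestProc = p; }
        }
        placement[t] = bestProc;
        taskFinish[t] = bestFinish;
        procFree[bestProc] = bestFinish;
    }
    return placement;  // placement[i] = processor assigned to task i
}
```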

    Visual threads: the benefits of multithreading in visual programming languages

    Technical Report: After working with the CWave visual programming language, we discovered that many of our target domains required the ability to define parallel computations within a program. CWave has a strongly hierarchical model of computation, so adding the ability to take a part of the hierarchy and execute it in parallel seemed like a good way of solving the problem. This led us to the concept of the Visual Thread and its associated components. Effectively, the Visual Thread allows the programmer to specify a part of the hierarchy and execute that part in parallel with the rest of the program. We have used this implementation in several domains and demonstrated that it allows easy mapping of real-world problems into our language. It eliminates most of the complexities often associated with programming parallel applications. We have also used a first prototype of our code generation system to translate CWave into Promela, which allows us to verify correctness properties of the programs.
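
    As a textual analogy only (CWave itself is a visual language, and the function names below are invented), the effect of a Visual Thread can be pictured as handing one branch of the program hierarchy to a separate thread, running the rest concurrently, and joining before the branch's results are used.

```cpp
// Sketch: run one "subtree" of the computation on its own thread and join
// with the rest of the program before combining results.
#include <cstdio>
#include <future>

int processBranchA() { return 41; }  // stands in for one subtree of the hierarchy
int processBranchB() { return 1; }   // stands in for the rest of the program

int main() {
    // Launch subtree A asynchronously, keep executing subtree B here.
    std::future<int> branchA = std::async(std::launch::async, processBranchA);
    int b = processBranchB();
    int a = branchA.get();            // join: wait for the parallel branch
    std::printf("combined result: %d\n", a + b);
}
```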

    Persistence is hard, then you die! Or, compiler and runtime support for a persistent Common Lisp

    Journal Article: Integrating persistence into an existing programming language is a serious undertaking. Preserving the essence of the existing language, adequately supporting persistence, and maintaining efficiency require low-level support from the compiler and runtime systems. Pervasive, low-level changes were made to a Lisp compiler and runtime system to introduce persistence. The result is an efficient language which is worthy of the name Persistent Lisp.

    The Core Collapse Supernova Rate from the SDSS-II Supernova Survey

    We use the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SNS) data to measure the volumetric core collapse supernova (CCSN) rate in the redshift range 0.03 < z < 0.09. Using a sample of 89 CCSNe, we find a volume-averaged rate of (1.06 +/- 0.19) x 10^-4 yr^-1 Mpc^-3 at a mean redshift of 0.072 +/- 0.009. We measure the CCSN luminosity function from the data and consider the implications for the star formation history. Comment: Minor corrections to references and affiliations to conform with the published version.
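
    For orientation only, a schematic form of a volumetric rate estimate is shown below; it is not the survey's actual estimator, and the symbols (event count N, effective survey duration T, comoving volume V, detection efficiency epsilon, mean redshift) are generic assumptions rather than quantities quoted from the paper.

```latex
% Schematic volumetric rate: detected events per unit comoving volume per
% rest-frame year, with (1 + <z>) converting observer-frame time to the
% rest frame.
\[
  R_V \;\approx\; \frac{N_{\mathrm{CCSN}}}{\epsilon \, T \, V}\,\bigl(1 + \langle z \rangle\bigr)
\]
```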